
    How Blockchain, Virtual Reality and Augmented Reality are converging, and why

    Nowadays, breakthrough technologies such as virtual reality (VR), augmented reality (AR), and Blockchain have attracted the attention of a huge number of investors worldwide. Although, at first glance, Blockchain (traditionally used for financial services) seems to have little in common with VR and AR (originally adopted for entertainment), in the last few years several use cases have started to appear showing effective ways to integrate these technologies. In this article, an overview of the opportunities investigated by current solutions combining VR, AR, and Blockchain is discussed, showing how they allowed both companies and academic researchers to cope with issues affecting traditional services and products in a rather heterogeneous set of application domains. Opportunities that could foster the convergence of these technologies and boost them further are also discussed.

    Is immersive virtual reality the ultimate interface for 3D animators?

    Creating computer animations is a labor-intensive task. Existing virtual reality (VR)-based animation solutions offer only heterogeneous subsets of the functionalities of traditional tools. We present an add-on for the Blender animation suite that enables users to switch between the native and an immersive VR-based interface and employ the latter to perform a representative set of tasks.

    Improving AR-powered remote assistance: A new approach aimed to foster operator’s autonomy and optimize the use of skilled resources

    Augmented Reality (AR) has a number of applications in industry, but remote assistance represents one of the most prominent and widely studied use cases. However, although the set of functionalities supporting the communication between remote experts and on-site operators has grown over time, the way in which remote assistance is delivered has not yet evolved to unleash the full potential of AR technology. The expert typically guides the operator step by step, and basically uses AR-based hints to visually support voice instructions. With this approach, skilled human resources may be under-utilized, as the time an expert invests in the assistance corresponds to the time needed by the operator to execute the requested operations. The goal of this work is to introduce a new approach to remote assistance that takes advantage of AR functionalities separately proposed in academic works and commercial products to re-organize the guidance workflow, with the aim of increasing the operator's autonomy and, thus, optimizing the use of the expert's time. An AR-powered remote assistance platform able to support the devised approach is also presented. By means of a user study, this approach was compared to traditional step-by-step guidance, with the aim of estimating how much of AR's potential is still unexploited. Results showed that with the new approach it is possible to reduce the time investment for the expert, allowing the operator to autonomously complete the assigned tasks in a time comparable to step-by-step guidance, with a negligible need for further support.

    Evaluating consumer interaction interfaces for 3D sketching in virtual reality

    Since its introduction, 3D mid-air sketching in immersive Virtual Reality (VR) has proved to be a very powerful tool for many creative applications. However, common VR sketching suites rely on the standard hand controllers bundled with home VR systems, which are non-optimal for this kind of task. To deal with this issue, some research works proposed the use of dedicated pen-shaped interfaces tracked with external motion-capture systems. Regrettably, these solutions are generally rather expensive, cumbersome, and unsuitable for many potential end-users. Hence, many challenges regarding interfaces for 3D sketching in VR still exist. In this paper, a newly proposed sketching-oriented input device (namely, a VR stylus) compatible with the tracking technology of a consumer-grade VR system is compared with a standard hand controller from the same system. In particular, the paper reports the results of a user study whose aim was to evaluate, in both objective and subjective terms, aspects such as, among others, sketching accuracy, ease of use, efficiency, comfort, control, and naturalness.

    User perception of robot's role in floor projection-based Mixed-Reality robotic games

    Robotic gaming is an emerging research area; in application domains in which the recent literature suggests combining commercial off-the-shelf (COTS) robots and projected mixed reality (MR) technology to develop engaging games, one of the crucial issues to consider in the design process is how to make the player perceive the robot as having a key role, i.e., how to valorize its presence from the user experience point of view. Building on this consideration, this paper reports efforts being carried out to investigate the impact of diverse game design choices in the above perspective, while at the same time extracting preliminary insights that can be exploited to orient further research in the field of MR-based robotic gaming and related scenarios.

    Comparing algorithms for aggressive driving event detection based on vehicle motion data

    Aggressive driving is one of the main causes of fatal crashes. Correctly identifying aggressive driving events still represents a challenge in the literature. Furthermore, the datasets available for testing the proposed approaches have some limitations, since they generally (a) include only a few types of events, (b) contain data collected with only one device, and (c) were generated in drives that did not fully consider the variety of road characteristics and/or driving conditions. The main objective of this work is to compare the performance of several state-of-the-art algorithms for aggressive driving event detection (belonging to the anomaly detection-, threshold-, and machine learning-based categories) on multiple datasets containing sensor data collected with different devices (black-boxes and smartphones), on different vehicles and in different locations. A secondary objective is to verify whether smartphones could replace black-boxes in aggressive/non-aggressive classification tasks. To this aim, we propose the AD² (Aggressive Driving Detection) dataset, which contains (i) data collected using multiple devices to evaluate their influence on algorithm performance, (ii) geographical data useful to analyze the context in which the events occurred, (iii) events recorded in different situations, and (iv) events generated by traveling the same path with aggressive and non-aggressive driving styles, in order to possibly separate the effects of driving style from those of road characteristics. Our experimental results highlighted the superiority of machine learning-based approaches and underlined the ability of smartphones to ensure a level of performance similar to that of black-boxes.
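    As a rough illustration of the threshold-based category of detectors mentioned in the abstract (this is a generic sketch, not the paper's actual algorithms; the function name and the threshold value are hypothetical), a minimal detector could flag samples whose acceleration magnitude exceeds a fixed threshold:

    ```python
    import math

    def detect_events(samples, threshold_ms2=3.0):
        """Return indices of samples whose acceleration magnitude exceeds a threshold.

        samples: list of (ax, ay, az) accelerometer readings in m/s^2, with
        gravity already removed. threshold_ms2 is an illustrative value only.
        """
        events = []
        for i, (ax, ay, az) in enumerate(samples):
            magnitude = math.sqrt(ax * ax + ay * ay + az * az)
            if magnitude > threshold_ms2:
                events.append(i)
        return events

    # Smooth driving, with a sudden braking spike at index 2.
    readings = [(0.1, 0.2, 0.0), (0.3, 0.1, 0.0), (4.5, 1.0, 0.0), (0.2, 0.0, 0.1)]
    print(detect_events(readings))  # [2]
    ```

    Real threshold-based methods are usually more elaborate (e.g., per-axis thresholds, sliding windows, and debouncing of consecutive samples), but this captures the basic idea that the abstract contrasts with anomaly detection- and machine learning-based approaches.
    
    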

    An evaluation testbed for locomotion in virtual reality

    A common operation performed in Virtual Reality (VR) environments is locomotion. Although real walking can represent a natural and intuitive way to manage displacements in such environments, its use is generally limited by the size of the area tracked by the VR system (typically, the size of a room) or requires expensive technologies to cover particularly extended settings. A number of approaches have been proposed to enable effective exploration in VR, each characterized by different hardware requirements and costs, and capable of providing different levels of usability and performance. However, the lack of a well-defined methodology for assessing and comparing the available approaches makes it difficult to identify, among the various alternatives, the best solutions for selected application domains. To deal with this issue, this paper introduces a novel evaluation testbed which, by building on the outcomes of many separate works reported in the literature, aims to support a comprehensive analysis of the considered design space. An experimental protocol for collecting objective and subjective measures is proposed, together with a scoring system able to rank locomotion approaches based on a weighted set of requirements. Testbed usage is illustrated in a use case requiring the selection of the technique to adopt in a given application scenario.
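    The weighted ranking idea described in the abstract can be sketched as a simple weighted sum (a minimal illustration only; the requirement names, weights, and scores below are hypothetical, not values from the testbed):

    ```python
    def rank_techniques(scores, weights):
        """Rank locomotion techniques by the weighted sum of their requirement scores.

        scores: {technique: {requirement: score}}
        weights: {requirement: weight}
        Returns (technique, total) pairs sorted from best to worst.
        """
        totals = {
            name: sum(weights[req] * value for req, value in reqs.items())
            for name, reqs in scores.items()
        }
        return sorted(totals.items(), key=lambda item: item[1], reverse=True)

    # Hypothetical requirement scores on a 0-5 scale.
    scores = {
        "teleportation": {"usability": 5, "presence": 2, "cost": 5},
        "walking-in-place": {"usability": 3, "presence": 4, "cost": 4},
    }
    weights = {"usability": 0.5, "presence": 0.3, "cost": 0.2}
    print(rank_techniques(scores, weights))
    ```

    Changing the weight vector re-ranks the same techniques, which is what lets a single testbed serve different application scenarios with different priorities.
    
    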

    Comparing state-of-the-art and emerging augmented reality interfaces for autonomous vehicle-to-pedestrian communication

    In the last few years, a considerable literature has grown around the theme of how to provide pedestrians and other vulnerable road users (VRUs) with a clear indication of a fully autonomous vehicle (FAV)'s status and intentions, which is crucial to make FAVs and VRUs coexist. So far, a variety of external interfaces leveraging different paradigms and technologies have been created. Proposed designs include vehicle-mounted devices (like LED panels), short-range on-road projection, and road infrastructure interfaces (e.g., special asphalts with embedded displays). These designs have been experimented with in different settings, using mockups, specially prepared vehicles, or virtual environments, with heterogeneous evaluation metrics. Some promising interfaces based on Augmented Reality (AR) have been proposed too, but their usability and effectiveness have not been tested yet. This paper aims to complement this body of literature by presenting a comparison of state-of-the-art interfaces and new designs under common conditions. To this aim, an immersive Virtual Reality-based simulation was developed, recreating a well-known scenario used in previous works, namely pedestrian crossing in urban environments under non-regulated conditions. A user study was then performed to investigate the various dimensions of vehicle-to-pedestrian interaction in both objective and subjective terms. Results showed that, although no interface clearly stands out over all the considered dimensions, one of the studied AR designs was able to provide state-of-the-art results in terms of safety and trust, at the cost of a higher cognitive effort and lower intuitiveness compared to LED panels showing anthropomorphic features. Together with rankings on the various dimensions, the indications about the advantages and drawbacks of the various alternatives that emerged from the study could be an important information source for future developments in the field.

    Automatic generation of affective 3D virtual environments from 2D images

    Today, a wide range of domains encompassing, e.g., movie and video game production, virtual reality simulations, and augmented reality applications make massive use of 3D computer-generated assets. Although many graphics suites already offer a large set of tools and functionalities to manage the creation of such contents, they are usually characterized by a steep learning curve. This aspect can make it difficult for non-expert users to create 3D scenes for, e.g., sharing their ideas or for prototyping purposes. This paper presents a computer-based system that is able to generate a possible reconstruction of the 3D scene depicted in a 2D image, by inferring the objects, materials, textures, lights, and camera required for rendering. The integration of the proposed system into a well-known graphics suite enables further refinement of the generated scene using traditional techniques. Moreover, the system allows users to explore the scene in an immersive virtual environment to better understand the current objects' layout, and provides the possibility to convey emotions through specific aspects of the generated scene. The paper also reports the results of a user study that was carried out to evaluate the usability of the proposed system from different perspectives.

    HandPainter – 3D sketching in VR with hand-based physical proxy

    3D sketching in virtual reality (VR) enables users to create 3D virtual objects intuitively and immersively. However, previous studies showed that mid-air drawing may lead to inaccurate sketches. To address this issue, we propose to use one hand as a canvas proxy and the index finger of the other hand as a 3D pen. To this end, we first perform a formative study to compare two-handed interaction with tablet-pen interaction for VR sketching. Based on the findings of this study, we design HandPainter, a VR sketching system which focuses on the direct use of two hands for 3D sketching without requiring any tablet, pen, or VR controller. Our implementation is based on a pair of VR gloves, which provide hand tracking and gesture capture. We devise a set of intuitive gestures to control the various functionalities required during 3D sketching, such as canvas panning and drawing positioning. We show the effectiveness of HandPainter by presenting a number of sketching results and discussing the outcomes of a user-study-based comparison with mid-air drawing and tablet-based sketching tools.